HowTo: Maximize GitHub Copilot’s Code Understanding of our repositories

Tags: news, code, copilot, development

Author: Dario Airoldi
Published: June 29, 2025
Modified: November 17, 2025

This document describes the strategies we use to enhance GitHub Copilot’s understanding of our (Diginsight) codebases.

Applied consistently, these techniques significantly improve Copilot’s code generation and its answers to code-related questions.

Understanding Official vs. Community Approaches

Before diving into specific strategies, it’s important to distinguish between official GitHub Copilot features and community-recommended patterns:

✅ Official GitHub Copilot Features

These are documented and supported by GitHub with specific behavior guarantees:

| Feature | Location | Purpose |
|---|---|---|
| Prompt Files | .github/prompts/*.prompt.md | Reusable prompts that become commands in Copilot Chat |
| Repository Instructions | .github/copilot-instructions.md | Auto-applied guidance for all chat interactions |
| Path-Specific Instructions | .github/instructions/*.instructions.md | Guidance applied to specific file patterns via applyTo |
| Custom Agents | .github/agents/*.agent.md | Specialized AI personas (VS Code 1.106+ only) |

⚠️ Community Patterns

These are widely used but not officially documented by GitHub. They rely on Copilot’s semantic search:

| Pattern | Common Location | Purpose |
|---|---|---|
| Context Documentation | .copilot/ or .copilot/context/ | Project-specific reference materials |
| Component READMEs | Throughout the codebase | Module and component documentation |
| Schema Documentation | docs/schemas/ or similar | Data model and API documentation |
| Architecture Docs | docs/ or .copilot/architecture/ | System design documentation |

Key Difference: Official features provide guaranteed behavior (like prompt commands appearing in the UI), while community patterns rely on Copilot’s semantic search to find and use the documentation.

Recommendation: Use both approaches - official features for interactive capabilities, and well-organized documentation for comprehensive context.

Table of Contents

  1. Most Impactful Strategies
  2. Medium Impact
  3. Important but Less Direct Impact
  4. Best Practices Summary
  5. References

1. Most Impactful Strategies

These strategies have the most direct and immediate impact on GitHub Copilot’s ability to understand our code context and generate relevant suggestions.

1.1. Organize Workspace with Information for AI

What we can do: Structure our:

  • project files
  • documentation
  • code comments

to maximize Copilot’s ability to understand our project architecture, patterns, and domain knowledge.

Why this improves Copilot understanding: Workspace organization is the foundation that enables every other AI understanding technique. When we organize our code comments, documentation placement, and AI-guidance files, we create a coherent information architecture that Copilot can draw on for project context and domain knowledge.

Impact on suggestion relevance: This strategy has the highest impact because it provides the structural foundation for all the others. When the workspace is organized for AI comprehension, Copilot can access and correlate information across multiple sources, leading to more contextually appropriate and architecturally sound suggestions.

1.1.1. Method Overview Comments

Provide domain context and decision-making rules to help Copilot understand our business logic:

// DOMAIN: This method handles period selection for data aggregation
// The aggregation range is determined based on the time period:
// - Today/Yesterday: Hourly aggregation
// - Week/Month: Daily aggregation
// - Semester/Year: Monthly aggregation
// The range directly affects data resolution and API performance
private (AggregationRange range, string format) ConfigurePeriodFormats(string periodType)
{
    // Implementation...
}

1.1.2. Strategic Code Comments

Focus on what Copilot cannot infer: While Copilot can generate standard XML documentation, it cannot understand our specific business rules, performance constraints, or domain-specific patterns. Focus comments on providing this unique context.

/// <summary>
/// Processes telemetry data batch with instrumentation and error handling
/// </summary>
/// <remarks>
/// BUSINESS RULE: Must maintain telemetry correlation across service boundaries
/// PERFORMANCE CONSTRAINT: Processing >1000 events/sec requires sampling (cost control)
/// DIGINSIGHT PATTERN: Always use ActivitySource for distributed tracing compliance
/// ANTI-PATTERN: Never log sensitive customer data in telemetry tags
/// </remarks>
public async Task<ProcessingResult> ProcessTelemetryDataAsync(TelemetryBatch batch)
{
    // BUSINESS LOGIC: Correlation ID required for cross-service tracing
    using var activity = _activitySource.StartActivity("ProcessTelemetryData");
    activity?.SetTag("batch.size", batch.Items.Count);
    
    try
    {
        // DOMAIN CONSTRAINT: Log structured data for observability dashboard
        _logger.LogInformation("Processing telemetry batch with {ItemCount} items", batch.Items.Count);
        
        var result = await _telemetryProcessor.ProcessAsync(batch);
        
        // COMPLIANCE: Success tracking required for SLA reporting
        activity?.SetTag("processing.success", true);
        return result;
    }
    catch (Exception ex)
    {
        // MONITORING: Error correlation essential for incident response
        activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
        _logger.LogError(ex, "Failed to process telemetry batch");
        throw;
    }
}

1.1.3. Directive Comments

Add explicit directives that guide Copilot’s code generation with prefixes like “COPILOT”:

// COPILOT: Standard Diginsight telemetry instrumentation pattern for service methods
// PATTERN: Activity creation with proper scoping and error handling
// DEPENDENCIES: Requires ActivitySource, ILogger<T>
// PERFORMANCE: Minimal overhead - activity creation is lightweight
public async Task<ProcessingResult> ProcessTelemetryDataAsync(TelemetryRequest request)
{
    // COPILOT: Always start with activity creation before any processing
    using var activity = _activitySource.StartActivity("ProcessTelemetryData");
    activity?.SetTag("request.type", request.Type);
    
    // Method implementation...
}

1.1.4. AI Context Files

Create documentation and context files across the repository that help AI understanding and choices.

Prompt Files with YAML Frontmatter:

Prompt files (.prompt.md) in .github/prompts/ can include YAML frontmatter to configure behavior:

---
name: create-telemetry-service
description: Generate a telemetry service following Diginsight patterns
agent: ask
model: GPT-4
tools: ['codebase', 'fetch']
argument-hint: 'serviceName=MyService'
---

# Create Telemetry Service

Generate a new telemetry service with the following characteristics:

1. Use ActivitySource for distributed tracing
2. Include structured logging with ILogger<T>
3. Implement standard error handling patterns
4. Add proper XML documentation

## Service Name
{{serviceName}}

## Implementation Pattern
[Rest of prompt content...]

YAML Frontmatter Fields:

  • name: Command name (defaults to the filename if omitted)
  • description: Shown in the Copilot Chat UI when selecting prompts
  • agent: Chat mode (ask, edit, agent, or a custom agent name)
  • model: Preferred LLM (e.g., GPT-4, GPT-3.5-Turbo)
  • tools: Restricted tool access (e.g., ['codebase', 'fetch'])
  • argument-hint: Suggests the parameter format in the input field

Path-Specific Instructions with applyTo Patterns:

Instruction files in .github/instructions/ use applyTo for targeting:

---
description: C# coding standards for services
applyTo:
  - "src/Services/**/*.cs"
  - "**/Services/*Service.cs"
---

# Service Implementation Guidelines

All service classes must:
1. Implement dependency injection via constructor
2. Use ILogger<T> for structured logging
3. Include ActivitySource for distributed tracing
4. Follow async/await patterns consistently

Recommended Structure:

/MyProject/
├── .github/                           # Official GitHub Copilot locations
│   ├── prompts/                       # ✅ Official: Prompt files (VS Code & VS 17.10+)
│   │   ├── telemetry-service.prompt.md    # Note: .prompt.md extension required
│   │   ├── error-handling.prompt.md
│   │   ├── unit-tests.prompt.md
│   │   └── performance-optimization.prompt.md
│   ├── copilot-instructions.md        # ✅ Official: Repo-level instructions
│   ├── instructions/                  # ✅ Official: Path-specific instructions
│   │   ├── csharp-conventions.instructions.md
│   │   └── api-patterns.instructions.md
│   └── agents/                        # ✅ Official: VS Code 1.106+ custom agents
│       └── testing-specialist.agent.md
├── .copilot/                          # ⚠️ Community pattern (not official)
│   ├── architecture.md
│   ├── patterns.md
│   └── context/                       # Enhanced semantic search context
│       ├── dataschemas/               # Data structure documentation
│       ├── apis/                      # External API documentation  
│       ├── patterns/                  # Code patterns and examples
│       ├── workflows/                 # Business process flows
│       ├── guidelines/                # Development standards
│       └── examples/                  # Code examples and templates
├── src/
│   ├── docs/                          # Comprehensive documentation
│   │   ├── api-reference.md
│   │   └── domain-concepts.md
│   ├── Services/
│   │   ├── README.md                  # Services module overview
│   │   ├── TelemetryService/
│   │   │   ├── README.md              # TelemetryService specific docs
│   │   │   ├── TelemetryService.cs
│   │   │   └── ITelemetryService.cs
│   │   └── LoggingService/
│   │       ├── README.md              # LoggingService specific docs
│   │       └── LoggingService.cs
│   └── Models/
│       ├── README.md                  # Data models overview
│       └── TelemetryModels.cs
└── README.md                          # Project root documentation

Key Documentation Locations:

  1. .github/prompts/ - ✅ Official: Workspace prompt files with .prompt.md extension that become commands in Copilot Chat
  2. .github/copilot-instructions.md - ✅ Official: Repository-level instructions automatically applied to all chat requests
  3. .github/instructions/ - ✅ Official: Path-specific instruction files with applyTo glob patterns
  4. .github/agents/ - ✅ Official (VS Code only): Custom agent definitions for specialized workflows
  5. .copilot/ - ⚠️ Community Pattern: Project-specific context and documentation (not officially recognized but helps semantic search)
  6. README.md Files: Module and component-level documentation throughout the codebase
  7. src/docs/: Comprehensive technical documentation and API references

1.1.5. Repository-Level Documents

GitHub Copilot uses standardized locations within the .github/ directory for AI-enhanced development:

Official .github/ Locations (Supported by GitHub Copilot)
| Location | Purpose | Support Level |
|---|---|---|
| .github/prompts/ | Workspace prompt files with the .prompt.md extension that become slash commands (/promptName in VS Code) or hashtag commands (#promptName in Visual Studio 17.10+). Each file can include YAML frontmatter for metadata. | ✅ Official - VS Code & Visual Studio 17.10+ |
| .github/copilot-instructions.md | Repository-level custom instructions automatically applied to all chat requests when enabled. Provides coding standards, conventions, and project-specific guidance. | ✅ Official - VS Code & Visual Studio 17.10+ |
| .github/instructions/ | Path-specific instruction files (.instructions.md) with YAML frontmatter using applyTo glob patterns (e.g., **/*.cs, docs/**) to target specific files or directories. | ✅ Official - VS Code & Visual Studio 17.10+ |
| .github/agents/ | Custom agent definitions (.agent.md files) for VS Code 1.106+ and Copilot CLI. Not supported in Visual Studio, which uses a separate AGENTS.md mechanism. | ✅ Official - VS Code 1.106+ only |

Important: The .github/copilot/prompts/ structure mentioned in some community blogs is not officially supported. Prompt files must be placed directly in .github/prompts/ to be recognized by Copilot.
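
The contents of copilot-instructions.md are referenced throughout this document but never shown. A minimal sketch of what ours might contain; the file has no required structure, and the specific rules below are drawn from the patterns described in this document:

```markdown
# Copilot Instructions

## Project Overview
Diginsight is a distributed observability platform built on OpenTelemetry standards.

## Coding Conventions
- Use ActivitySource.StartActivity() for distributed tracing in service methods
- Use ILogger<T> with structured logging templates
- Suffix asynchronous methods with Async and follow async/await patterns consistently

## Anti-Patterns
- Never log sensitive customer data in telemetry tags
- Avoid dynamic activity names (they create high cardinality)
```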

Community Pattern: .copilot/ Directory

The .copilot/ directory is a community-recommended pattern (not an official GitHub feature) for organizing project-specific context that enhances Copilot’s semantic search:

.copilot/
├── context/               # Domain-specific knowledge (community pattern)
│   ├── dataschemas/       # Data structure documentation
│   ├── apis/              # External API documentation
│   ├── patterns/          # Code patterns and examples
│   ├── workflows/         # Business process flows
│   └── guidelines/        # Development standards
├── architecture/          # System design documents
│   ├── component-diagrams.md
│   ├── data-flow.md
│   └── system-overview.md
├── troubleshooting/       # Common issues and solutions
│   ├── debugging-guide.md
│   └── common-errors.md
└── examples/              # Code examples and templates
    ├── service-templates/
    └── test-patterns/

Status: While not officially recognized by GitHub, Copilot’s semantic search indexes markdown files in your workspace, so well-organized documentation in .copilot/ can improve context awareness.

| Aspect | .github/ Locations | .copilot/ Directory |
|---|---|---|
| Official Support | ✅ Yes - documented by GitHub | ⚠️ No - community pattern |
| Prompt Files | ✅ Appear as commands in the Copilot UI | ❌ Not supported |
| Instructions | ✅ Auto-applied when enabled | ❌ Not auto-applied |
| Context Discovery | ✅ Prioritized by Copilot | ⚠️ Relies on semantic search |
| Best Use | Team-shared configuration, prompts, instructions | Project documentation, reference materials |

Recommendation: Use .github/ locations for all official Copilot features (prompts, instructions, agents), and .copilot/ for supplementary documentation that helps semantic search.

Key Benefits:

  • Layered Context: Copilot accesses the most relevant documentation based on where we’re working
  • Easy Maintenance: Documentation stays close to the code it describes
  • Focused Information: Each file addresses specific concerns without overwhelming detail

1.1.6. Project Context Organization

What we can do: Leverage a well-organized project structure to centralize documentation, following both official GitHub Copilot features and community best practices for providing contextual information.

Why this improves Copilot understanding: A well-structured documentation hierarchy helps GitHub Copilot’s semantic search find relevant context when generating code suggestions. By organizing information logically and consistently, we make it easier for Copilot to locate and understand project-specific patterns, domain concepts, and architectural decisions.

Official vs. Community Approaches:

GitHub Copilot officially supports specific locations for interactive features:

  • ✅ .github/prompts/ for reusable prompt commands
  • ✅ .github/copilot-instructions.md for automatic instruction injection
  • ✅ .github/instructions/ for path-specific guidance

For supplementary documentation, teams commonly use:

  • ⚠️ .copilot/ or .copilot/context/ as community patterns
  • ⚠️ src/docs/ or docs/ for technical documentation
  • ⚠️ Component-level README.md files throughout the codebase

Recommendation: Combine both approaches - use official .github/ locations for Copilot features, and organize reference documentation in a way that supports semantic search.

1.1.7. Schema Documentation

Document data models, their business rules, and common query patterns so Copilot can generate correct entity code and database queries:

Business Rules

  • projectId: Must be valid GUID, references ProjectEntity
  • activityId: W3C trace context format required
  • timestamp: UTC timezone, ISO 8601 format
  • level: Must be one of: Trace, Debug, Information, Warning, Error, Critical
  • tags: Key-value pairs, keys must follow OpenTelemetry semantic conventions

Common Query Patterns

-- Get recent events for a project
SELECT * FROM c 
WHERE c.projectId = @projectId 
  AND c.timestamp >= @startTime 
ORDER BY c.timestamp DESC

-- Count events by level
SELECT c.level, COUNT(1) as count 
FROM c 
WHERE c.projectId = @projectId 
GROUP BY c.level

Relationships

  • ProjectEntity: Many telemetry events belong to one project
  • UserSession: Events may be associated with user sessions via tags
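
The business rules, query patterns, and relationships above can be consolidated into a single schema document that Copilot’s semantic search can find. A sketch, assuming a community-pattern location such as `.copilot/context/dataschemas/telemetry-event.md` (the filename is illustrative):

```markdown
# TelemetryEvent Schema

## Properties
| Property   | Type   | Rules                                                |
|------------|--------|------------------------------------------------------|
| projectId  | GUID   | Required; references ProjectEntity                   |
| activityId | string | W3C trace context format                             |
| timestamp  | string | UTC timezone, ISO 8601 format                        |
| level      | string | Trace, Debug, Information, Warning, Error, Critical  |
| tags       | object | Keys follow OpenTelemetry semantic conventions       |

## Relationships
- ProjectEntity: many telemetry events belong to one project
- UserSession: events may be associated with user sessions via tags
```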

Key Benefits of This Approach:

  1. Improved Code Generation: Copilot can suggest proper entity constructors, property assignments, and validation logic
  2. Accurate Query Suggestions: Common query patterns help Copilot suggest appropriate database operations
  3. Business Rule Enforcement: Documented constraints help Copilot generate validation code
  4. Consistent API Design: Schema knowledge leads to better REST API endpoint suggestions
  5. Reduced Documentation Lookup: Developers get context-aware suggestions without leaving their IDE

1.1.8. Component-Specific Documentation

Create targeted documentation files that provide domain-specific knowledge for individual components. While there’s no special .copilot.md extension recognized by GitHub, well-organized markdown documentation near components helps Copilot’s semantic search:

Effective Approaches:

  1. README.md files: Place detailed README files in component directories
  2. docs/ folders: Create component-specific documentation folders
  3. Inline comments: Use rich code comments with business context
  4. Path-specific instructions: Use .github/instructions/ with applyTo patterns

Example Component Documentation:

# Services/TelemetryService/README.md

## Telemetry Service Overview

### Database Configuration
- **Database**: diginsightdb (CosmosDB)
- **Collections**: 
  - `data` - Projects and entities
  - `data-sources` - Source type definitions

### Key Concepts
- **Project ID**: `12345678-0c85-4592-8396-3f3e8656ed03` = "Diginsight Sample Project"
- **Data Types**: Activity Events, Telemetry, Configuration
- **Period Aggregation**: Affects data granularity and API performance

### Design Patterns
- **Activity Sources**: Entry points for distributed tracing
- **Structured Logging**: Correlation with distributed traces
- **Batch Processing**: Optimization for >100 events

### Anti-Patterns to Avoid
- ❌ Dynamic activity names (creates high cardinality)
- ❌ Generic logger categories (reduces traceability)
- ❌ Logging sensitive data in telemetry tags
- ❌ Synchronous calls in hot paths

### Common Operations
```csharp
// Standard telemetry service method pattern
public async Task<Result> ProcessAsync(Request request)
{
    using var activity = _activitySource.StartActivity("ProcessData");
    activity?.SetTag("request.id", request.Id);
    
    try
    {
        _logger.LogInformation("Processing {RequestId}", request.Id);
        var result = await _processor.ProcessAsync(request);
        return result;
    }
    catch (Exception ex)
    {
        activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
        _logger.LogError(ex, "Failed processing {RequestId}", request.Id);
        throw;
    }
}
```

Path-Specific Instructions Alternative:

For component-specific guidance that should always be applied, use `.github/instructions/`:

```markdown
# .github/instructions/telemetry-services.instructions.md
---
description: Telemetry service implementation guidelines
applyTo:
  - "**/TelemetryService/**/*.cs"
  - "**/Services/*Telemetry*.cs"
---

## Telemetry Service Guidelines

When working with telemetry services:

1. Always use ActivitySource for distributed tracing
2. Include structured logging with correlation IDs
3. Set activity status on errors
4. Use semantic tag names following OpenTelemetry conventions
5. Implement batch processing for >100 events

## Required Dependencies
- ILogger<T> for structured logging
- ActivitySource for distributed tracing
- IOptions<TelemetryConfig> for configuration
```

1.1.9. Architecture Documentation

Create comprehensive architecture documentation that provides system-level context:

# .copilot/architecture.md - Diginsight Telemetry System Architecture

## System Overview
Diginsight Telemetry is a distributed observability platform built on OpenTelemetry standards.

## Core Components
- **Activity Sources**: Distributed tracing entry points
- **Telemetry Processors**: Data transformation and enrichment
- **Export Pipeline**: Batching and transmission to observability backends
- **Configuration System**: Dynamic settings and sampling controls

## Component Interactions
```text
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│ Application     │───▶│ ActivitySource  │───▶│ Telemetry       │
│ Code            │    │(Instrumentation)│    │ Processor       │
└─────────────────┘    └─────────────────┘    └─────────────────┘


┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│ Azure Monitor / │◀───│ Export Pipeline │◀───│ Batch Processor │
│ Other Backends  │    │ (OTLP)          │    │ (Sampling)      │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```

Key Architectural Decisions

  • OpenTelemetry Standard: Vendor-neutral observability
  • Push-based Collection: Better performance than pull-based
  • Structured Logging: Correlation with distributed traces
  • Sampling Strategy: Cost control while maintaining visibility

1.1.10. Documentation Header Hierarchy

Use consistent header hierarchies to establish clear information structure:

# Diginsight Telemetry System
## Core Components
### Data Retrieval Flow
#### ProcessTelemetryDataAsync Method

This hierarchical structure helps Copilot understand the relationship between concepts and suggest code that follows the same organizational patterns.

1.1.11. README Integration Strategy

Create AI-friendly project documentation in our README files:

# Diginsight Telemetry - AI Development Guide

## For GitHub Copilot Users

This project uses OpenTelemetry for distributed tracing. Common patterns:

1. **Activity Creation**: Always use `_activitySource.StartActivity()`
2. **Error Handling**: Set activity status on exceptions
3. **Tagging**: Use semantic tag names following OpenTelemetry conventions
4. **Logging**: Correlate logs with activities using structured logging

## Quick Copilot Queries

- "Generate a new telemetry service following Diginsight patterns"
- "Add error handling to this method using our standard approach"
- "Create unit tests for telemetry methods with proper mocking"

1.1.12. External References Strategy

Important Limitation: Copilot cannot access external URLs, wikis, or online documentation during code generation. However, we can still reference them strategically:

# .copilot/architecture.md

## External Documentation References

### OpenTelemetry Standards
- **Specification**: https://opentelemetry.io/docs/specs/otel/
- **Key Patterns**: Trace context propagation, semantic conventions
- **Local Summary**: Always use W3C trace context headers for correlation

### Azure DevOps Wiki References  
- **Team Architecture Decisions**: https://dev.azure.com/ourorg/project/_wiki/wikis/Architecture
- **Key Decisions**: Service mesh adoption, database partitioning strategy
- **Local Summary**: Use event-driven patterns for telemetry aggregation

### Internal API Documentation
- **REST API Docs**: https://internal-docs.company.com/telemetry-api
- **Key Endpoints**: /api/telemetry/batch, /api/metrics/query
- **Local Summary**: Use batch endpoints for >100 events, single for real-time

Best Practice for External References:

  1. Include the link for human developers
  2. Summarize key information locally that Copilot can understand
  3. Extract essential patterns into our local documentation
  4. Copy critical code examples rather than linking to them

1.2. Semantic Naming for AI Understanding

What we can do: Use descriptive, hierarchical naming conventions for methods, classes, and variables that clearly convey intent and relationships within our codebase.

Why this improves Copilot understanding: Descriptive, hierarchical naming conventions help Copilot understand the intent and relationships within our codebase. When method names, class names, and variable names follow consistent patterns that convey meaning, Copilot can better predict what related code should look like and suggest appropriate completions.

Impact on suggestion relevance: Semantic naming enables Copilot to suggest code that follows our established patterns, maintains consistency across our codebase, and uses meaningful names that align with our domain terminology.

Use Descriptive, Hierarchical Naming

// GOOD: Clear intent and hierarchy
public class DiginsightTelemetryDataAggregationService
{
    public async Task<PeriodAggregatedData> GetHourlyAggregatedTelemetryData(...)
    public async Task<PeriodAggregatedData> GetDailyAggregatedTelemetryData(...)
}

// COPILOT understands this pattern and suggests similar methods
public class DiginsightLogAnalysisService  
{
    // Copilot will suggest: GetHourlyAggregatedLogData, GetDailyAggregatedLogData
}

Pattern-Based Method Names

// This comment helps Copilot understand the expected pattern
public async Task<ApiResponse<TelemetryMetrics>> GetHourlyTelemetryMetricsAsync(...)
// Copilot will now suggest GetDailyTelemetryMetricsAsync, GetWeeklyTelemetryMetricsAsync

// Domain context in variable names guides suggestions
var hourlyDataRetrieval = new TelemetryDataRetrieval();
// Copilot understands: dailyDataRetrieval, weeklyDataRetrieval should follow

1.3. Domain Concept Documentation

What we can do: Clearly define and document our domain-specific terminology, business concepts, and their relationships to help Copilot understand our business logic.

Why this improves Copilot understanding: Domain-specific terminology and concepts are crucial for Copilot to understand our business logic. When we clearly define terms like “Data Sources,” “Groups,” and “Activity Events,” Copilot can better understand the context of our code and suggest domain-appropriate solutions rather than generic programming patterns.

Impact on suggestion relevance: With clear domain concepts, Copilot can suggest variable names, method signatures, and logic flows that align with our business domain, making suggestions more meaningful and reducing the need for manual corrections.

Core Domain Concepts

# Domain Concepts

## Data Sources
Types of data being processed (Telemetry, Logs, etc.). Each source has its own metrics.

## Groups
Collections of entities with data measurements. Can be:
- Runtime Group: System-defined grouping
- Custom Group: User-defined collection of entities

## Data Types
Measurements tracked for each source:
- Activity Events: Application activity logs
- Telemetry: Performance and usage data
- Configuration: Dynamic settings

Example Usages

## Common Usage Patterns

### Period Selection

// Period selection affects the data aggregation range
switch (settings.Period)
{
    case "Today":
        // Uses hourly aggregation (24 data points)
        range = Diginsight.Common.Querying.AggregationRange.Hour;
        break;
    case "CurrentMonth":
        // Uses daily aggregation (28-31 data points)
        range = Diginsight.Common.Querying.AggregationRange.Day;
        break;
}

1.4. Code Patterns and Conventions

What we can do: Establish and document consistent code patterns, naming conventions, error handling approaches, and architectural patterns that should be applied throughout our codebase.

Why this improves Copilot understanding: Establishing and documenting code patterns and conventions helps Copilot understand our team’s preferred approaches to common programming tasks. This includes naming conventions, error handling patterns, logging strategies, and architectural patterns that should be consistently applied.

Impact on suggestion relevance: Clear code patterns and conventions ensure that Copilot’s suggestions follow our established standards, maintain consistency across the codebase, and adhere to our team’s best practices, resulting in code that fits seamlessly into our existing project structure.

Naming Conventions

# Coding Standards

## Activity Names
- Use hierarchical names: `Diginsight.Service.Method`
- Include operation type: `Diginsight.Data.Query`, `Diginsight.Http.Request`
- Avoid dynamic names that create high cardinality

## Logger Categories
- Use class-based categories: `ILogger<MyService>`
- For static contexts: `ILogger<Program>` or specific category names
- Avoid generic categories like "Application" or "System"

## Metric Names
- Use dot notation: `diginsight.request.duration`
- Include units: `diginsight.memory.bytes`, `diginsight.duration.milliseconds`
- Follow OpenTelemetry semantic conventions

Common Patterns

// Standard method instrumentation pattern
public async Task<Result> ProcessAsync(Request request)
{
    using var activity = _activitySource.StartActivity();
    activity?.SetTag("request.id", request.Id);
    
    try
    {
        _logger.LogInformation("Processing request {RequestId}", request.Id);
        
        var result = await DoWorkAsync(request);
        
        activity?.SetTag("result.status", "success");
        return result;
    }
    catch (Exception ex)
    {
        activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
        _logger.LogError(ex, "Failed to process request {RequestId}", request.Id);
        throw;
    }
}

1.5. Effective Prompting Strategies

What we can do: Develop and document standardized prompting patterns that help developers leverage the documentation in our codebase when working with Copilot.

Why this improves Copilot understanding: While other strategies focus on making our codebase more understandable to Copilot, effective prompting strategies create a crucial feedback loop between our documentation and AI interaction. When developers know how to reference existing patterns, naming conventions, and architectural decisions in their prompts, Copilot can locate and apply this contextual information more effectively.

Impact on suggestion relevance: Strategic prompting dramatically improves Copilot’s ability to generate code that aligns with our established patterns and architectural constraints. By teaching developers how to prompt effectively, we create a multiplier effect that enhances the value of all our other documentation efforts.

Reference-Based Prompts

Train the team to reference specific documentation in prompts:

// Example prompt: "Create a telemetry method following the patterns in Services/TelemetryService/README.md"
// Example prompt: "Add error handling similar to ProcessTelemetryDataAsync"
// Example prompt: "Generate a service following the architecture described in docs/architecture.md"

Pattern-Driven Prompts

Create a “prompt dictionary” that developers can reference:

# Effective Copilot Prompts

## For Telemetry Instrumentation
- "Create a method that processes [data type] with standard Diginsight telemetry instrumentation"
- "Add proper activity tracking to this method following our distributed tracing pattern"

## For Error Handling
- "Implement our standard error handling pattern with proper activity status in this method"
- "Update this method to include structured logging with our telemetry correlation pattern"

Component-Specific Prompts

Document component-specific prompting patterns:

# Telemetry Service Prompts

When working with telemetry services, use these specific prompts:

- "Create a new telemetry processor that handles [data type] following our sampling pattern"
- "Implement a batch processing method for telemetry data that follows our performance guidelines"
- "Generate unit tests for this telemetry method with proper activity source mocking"

1.6. Code Annotations for AI Tools

What we can do: Implement a consistent annotation system specifically designed to guide AI tools like Copilot, using distinct markers that highlight patterns, constraints, and relationships.

Why this improves Copilot understanding: Standard comments are helpful, but a consistent annotation vocabulary creates a more targeted signal. Note that `@ai:` markers are a community convention, not an official Copilot feature: Copilot reads them as ordinary comments, but their distinct, repeated structure makes patterns, constraints, and relationships easier to identify in the surrounding context.

Impact on suggestion relevance: AI-specific annotations improve Copilot’s ability to understand our code’s unique constraints, patterns, and relationships. They serve as clear signposts that help Copilot navigate our codebase and generate suggestions that align closely with our team’s expectations.

AI Directive Annotations

Create standardized AI directive annotations:

// @ai:pattern - This class follows the Repository pattern with CosmosDB integration
// @ai:constraint - This method must maintain correlation IDs across service boundaries
// @ai:relationship - This service depends on IDataSourceRepository and IEntityAdapter
public class DiginsightService
{
    // @ai:example - Standard constructor injection pattern
    public DiginsightService(IDataSourceRepository repo, IEntityAdapter adapter)
    {
        // Implementation...
    }

    // @ai:pattern - Standard telemetry method with proper instrumentation
    // @ai:performance - Batch operations for >100 items to reduce API calls
    public async Task<Result> ProcessDataAsync(Request request)
    {
        // Implementation...
    }
}

Schema Annotations

Add structured schema annotations to complex data models:

// @ai:schema - ProjectEntity schema
// @ai:property id - GUID format, globally unique identifier
// @ai:property name - User-visible project name, max 100 chars
// @ai:property type - Must be "project", used for CosmosDB querying
public class ProjectEntity
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Type { get; } = "project";
}

Relationship Annotations

Document component relationships with special annotations:

// @ai:depends-on - ILogger<T>, ActivitySource
// @ai:consumed-by - TelemetryService, MetricsService
// @ai:implements - OpenTelemetry.ActivitySource pattern
public class DiginsightActivitySource
{
    // Implementation...
}
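Annotations only pay off if Copilot is consistently reminded to honor them. A path-specific instruction file can do that; this is a sketch with an assumed filename and glob pattern:

```markdown
---
applyTo: "src/Services/**/*.cs"
---
Files in this directory use `// @ai:` annotations:

- `@ai:pattern` names the design pattern the type must follow
- `@ai:constraint` states invariants that generated code must preserve
- `@ai:depends-on` / `@ai:consumed-by` describe the allowed dependency direction

Never generate code that violates an `@ai:constraint` annotation.
```

Saved as, for example, `.github/instructions/services.instructions.md`, this guidance is applied automatically to matching files.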

2. Medium Impact

These strategies provide substantial improvements to Copilot’s understanding, particularly for code structure, patterns, and domain-specific implementations.

2.1. AI-Optimized Code Comments

What we can do: Structure our code comments specifically to provide AI with tactical context about implementation details, dependencies, and performance considerations.

Why this improves Copilot understanding: Strategic code comments provide tactical context that helps Copilot understand specific implementation details, dependencies, and performance considerations. While less impactful than architecture documentation, they provide valuable hints for method-level code generation.

Impact on suggestion relevance: AI-optimized comments help Copilot suggest code that follows our specific patterns and handles edge cases appropriately, though they primarily influence local code suggestions rather than system-wide architectural decisions.

Structured Comments for AI

Structure comments specifically for AI understanding:

// COPILOT: Standard Diginsight telemetry instrumentation pattern for service methods
// PATTERN: Activity creation with proper scoping and error handling
// DEPENDENCIES: Requires ActivitySource, ILogger<T>
// PERFORMANCE: Minimal overhead - activity creation is lightweight
public async Task<ProcessingResult> ProcessTelemetryDataAsync(TelemetryRequest request)
{
    // COPILOT: Standard Diginsight activity creation pattern
    using var activity = _activitySource.StartActivity("ProcessTelemetryData");
    activity?.SetTag("request.type", request.Type);
    
    try
    {
        _logger.LogInformation("Processing telemetry request {RequestId}", request.Id);
        var result = await _telemetryProcessor.ProcessAsync(request);
        activity?.SetTag("result.status", "success");
        return result;
    }
    catch (Exception ex)
    {
        activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
        _logger.LogError(ex, "Failed to process telemetry request {RequestId}", request.Id);
        throw;
    }
}

2.2. Implementation Examples

What we can do: Provide concrete implementation examples that demonstrate our preferred patterns and coding styles, serving as templates for Copilot to follow.

Why this improves Copilot understanding: Concrete implementation examples show Copilot the preferred patterns and coding styles for our project. When Copilot sees how we handle period calculations or batch processing, it can suggest similar patterns for new functionality, maintaining consistency across our codebase.

Impact on suggestion relevance: Implementation examples serve as templates for Copilot to follow, ensuring that new code suggestions match our existing patterns, naming conventions, and architectural approaches, leading to more cohesive and maintainable code.

Period Helper Methods

Extract common period calculation code:

/// <summary>
/// Configures period formats and ranges based on the period type
/// </summary>
/// <param name="periodType">The selected period (Today, Yesterday, CurrentWeek, etc.)</param>
/// <param name="projectLocalNow">The current time in the project's local timezone</param>
/// <returns>Configuration for the selected period</returns>
private (Diginsight.Common.Querying.AggregationRange Range, string Format, string TitleFormat) 
    ConfigurePeriod(string periodType, DateTime projectLocalNow)
{
    switch (periodType)
    {
        case "Today":
        case "Yesterday":
            return (Diginsight.Common.Querying.AggregationRange.Hour, "HH:mm", "dd MMM yyyy - HH:mm");
        // Additional cases...
        default:
            throw new ArgumentOutOfRangeException(nameof(periodType), periodType, "Unsupported period type");
    }
}

Data Source Batch Processing

/// <summary>
/// Retrieves and processes all data sources for a project in a single batch operation
/// </summary>
/// <param name="projectId">The project ID</param>
/// <param name="group">The entity group to analyze</param>
/// <param name="dateRange">Date range for data retrieval</param>
/// <returns>Dictionary mapping source types to their data</returns>
private async Task<Dictionary<string, GroupAggregation<AggregateValues>>> 
    GetBatchData(Guid projectId, DiginsightClasses.IGroup group, DateRange dateRange)
{
    // Implementation that reduces API calls through batching
}

2.3. Data Model Documentation

What we can do: Document our data structures, relationships, and database schema to help Copilot understand how data flows through our application.

Why this improves Copilot understanding: Data models are the foundation of any application. When Copilot understands our data structures, relationships, and database schema, it can suggest appropriate CRUD operations, data transformations, and validation logic. This is especially critical for telemetry systems where data flows through multiple transformation stages.

Impact on suggestion relevance: With clear data model documentation, Copilot can suggest proper entity mappings, database queries, and data processing patterns that respect our schema constraints and business rules, reducing bugs and improving code quality.

Key Data Models

# Core Data Models

## DiginsightData
Primary response model with hierarchical structure:

- DiginsightData
  - DataQuality (validity warnings)
  - Detail
    - Chart[] (time series data)
    - DataGroup (aggregated values)
  - PeriodDate (date ranges)

Database Structure

# Database Structure

## Azure Cosmos DB (diginsightdb)
- **Collection: data**
  - Projects (key: id)
    - Example: "id": "12345678-0c85-4592-8396-3f3e8656ed03", "name": "Diginsight Sample Project"
  - Entities (partitioned by projectId)
  - Groups (custom entity groups)

## Sample Documents
```json
{
  "id": "12345678-0c85-4592-8396-3f3e8656ed03",
  "name": "Diginsight Sample Project",
  "type": "project",
  "_etag": "\"00000000-0000-0000-0000-000000000000\""
}
```
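The `type` discriminator is most useful when documented alongside query patterns. A hypothetical query note for the `data` collection (the SQL is illustrative, not copied from our codebase):

```sql
-- Fetch a single project document by discriminator and id
SELECT * FROM data d
WHERE d.type = "project"
  AND d.id = "12345678-0c85-4592-8396-3f3e8656ed03"
```

Documenting queries next to sample documents lets Copilot connect schema fields to the filters actually used against them.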

2.4. API and Interface Documentation

What we can do: Document our API contracts, method signatures, and interface boundaries to help Copilot understand system contracts and expected behaviors.

Why this improves Copilot understanding: Clear API and interface documentation helps Copilot understand the contracts and boundaries within our system. When Copilot knows the signatures, parameters, and expected behavior of our public APIs, it can suggest proper implementations and usage patterns.

Impact on suggestion relevance: API documentation enables Copilot to suggest code that correctly implements interfaces, respects method signatures, and follows our established patterns for API design and usage, reducing integration errors and improving code consistency.

2.5. Code Relationship Documentation

What we can do: Document how different classes, services, and components interact with each other, including dependency flows and architectural patterns.

Why this improves Copilot understanding: Understanding how different classes, services, and components interact is crucial for Copilot to suggest appropriate design patterns and architectural solutions. When Copilot knows that DiginsightService depends on IDataSourceRepository, it can suggest proper dependency injection patterns and interface implementations.

Impact on suggestion relevance: Clear relationship documentation enables Copilot to suggest code that respects our architecture, follows dependency flow patterns, and maintains proper separation of concerns, leading to more maintainable and consistent code suggestions.

Class Dependencies

# Component Relationships

## Service Layer
- DiginsightService 
  → IDataSourceRepository (CosmosDB data access)
  → IEntityAdapter (entity access)
  → IGroupAdapter (group management)

## Data Flow
1. Widget request → ProcessTelemetryDataAsync
2. Period calculation and group resolution
3. For each data source type:
   - Filter entities by data source
   - Fetch data
   - Process into chart format

Visual Documentation

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│ Data Sources    │────▶│ Entity Group    │────▶│ Data            │
│ (Telemetry,     │     │ (Custom or      │     │ (Activity,      │
│  Logs, etc.)    │     │  Runtime)       │     │  Telemetry)     │
└─────────────────┘     └─────────────────┘     └─────────────────┘

Performance Considerations

What we can do: Document performance constraints, bottlenecks, and optimization strategies that are specific to our application domain.

Why this improves Copilot understanding: Performance constraints and optimization strategies are critical context for code generation. When Copilot understands that ProcessTelemetryDataAsync has performance considerations for batch processing, it can suggest optimizations like parallel processing, sampling, or async patterns that address these specific telemetry concerns.

Impact on suggestion relevance: Performance documentation helps Copilot suggest code that considers scalability, efficiency, and resource usage from the start, preventing performance issues rather than requiring later optimization.

# Performance Considerations

## Current Bottlenecks
- ProcessTelemetryDataAsync is the main hot path: it processes telemetry batches with full instrumentation on every call
- Current load: ~20 calls/sec in production
- Each data source creates a new filtered group object

## Optimization Strategies
- Batch data source calls where possible
- Cache filtered groups for reuse
- Implement more efficient filtering to reduce API calls

3. Important but Less Direct Impact

These strategies provide foundational improvements that support overall code comprehension and long-term maintainability, though their impact on immediate code suggestions may be less direct.

3.1. Configuration and Environment Documentation

What we can do: Document our configuration structures, environment variables, and dependency injection patterns to help Copilot understand how our application behaves in different environments.

Why this improves Copilot understanding: Configuration is critical for understanding how an application behaves in different environments. When Copilot understands our configuration structure, environment variables, and dependency injection patterns, it can suggest code that properly handles configuration changes and environment-specific behavior.

Impact on suggestion relevance: Configuration documentation enables Copilot to suggest code that properly accesses configuration values, handles environment differences, and follows our established patterns for dependency injection and service registration.

Configuration Files Documentation

Document configuration patterns and environment-specific settings:

# Configuration Management

## appsettings.json Structure

```json
{
  "Diginsight": {
    "Telemetry": {
      "DefaultLogLevel": "Information",
      "EnableDynamicLogging": true,
      "SamplingRate": 0.1
    },
    "OpenTelemetry": {
      "ServiceName": "Diginsight.Sample",
      "ServiceVersion": "1.0.0"
    }
  }
}
```

Environment Variables

  • DIGINSIGHT_LOG_LEVEL: Override default log level
  • DIGINSIGHT_SAMPLING_RATE: Control telemetry sampling
  • AZURE_CONNECTION_STRING: Azure Monitor connection

Dependency Injection Patterns

// Standard DI registration pattern for Diginsight Telemetry
services.AddDiginsightTelemetry(options =>
{
    options.ServiceName = "MyService";
    options.EnableConsoleLogging = true;
    options.EnableAzureMonitor = !isDevelopment;
});

3.2. Testing Strategy and Error Patterns

What we can do: Document common error scenarios, testing patterns, and expected behaviors to help Copilot suggest robust code that handles edge cases appropriately.

Why this improves Copilot understanding: Testing documentation and error patterns help Copilot understand expected behavior and common failure scenarios. This knowledge is crucial for suggesting robust code that handles edge cases and follows established testing patterns in our project.

Impact on suggestion relevance: With clear testing strategies and error patterns, Copilot can suggest code that includes appropriate error handling, follows our testing conventions, and anticipates common problems, leading to more reliable and testable code suggestions.

Common Error Scenarios

Document typical error patterns and their solutions:

# Common Error Patterns

## Configuration Errors
- Missing connection strings → Check appsettings.json
- Invalid service names → Verify OpenTelemetry configuration
- Permission issues → Check Azure RBAC settings

## Performance Issues
- High memory usage → Check sampling rates
- Slow telemetry → Review batch export settings
- Missing traces → Verify instrumentation setup

## Testing Patterns
- Use TestHost for integration tests
- Mock ILogger<T> for unit tests
- Use InMemoryExporter for telemetry validation

Test Examples

[Test]
public async Task Should_Generate_Telemetry_For_Method_Execution()
{
    // Arrange: without a registered listener, StartActivity returns null
    using var listener = new ActivityListener
    {
        ShouldListenTo = source => source.Name == _activitySource.Name,
        Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData
    };
    ActivitySource.AddActivityListener(listener);
    using var activity = _activitySource.StartActivity("test-operation");

    // Act
    var result = await _service.ProcessDataAsync();

    // Assert
    Assert.IsNotNull(activity);
    Assert.AreEqual("test-operation", activity!.DisplayName);
    Assert.IsNotNull(result);
}

API and Interface Documentation

The examples below put the guidance from section 2.4 into practice. When the signatures, parameters, and expected behavior of our public APIs are spelled out, Copilot can suggest implementations that correctly honor those contracts and follow our established usage patterns.

Public API Contracts

# Public APIs

## IDiginsightTelemetryService
Primary interface for telemetry operations:

```csharp
public interface IDiginsightTelemetryService
{
    /// <summary>
    /// Starts a new activity with automatic telemetry collection
    /// </summary>
    /// <param name="activityName">Name of the activity</param>
    /// <param name="tags">Optional tags for the activity</param>
    /// <returns>Disposable activity that ends when disposed</returns>
    IDisposable StartActivity(string activityName, Dictionary<string, object>? tags = null);
    
    /// <summary>
    /// Logs structured data with telemetry correlation
    /// </summary>
    void LogStructured<T>(LogLevel level, string message, T data);
}
```

Extension Methods

// Common extension patterns for telemetry
public static class TelemetryExtensions
{
    public static IServiceCollection AddDiginsightTelemetry(
        this IServiceCollection services, 
        Action<DiginsightOptions> configure)
    {
        // Configuration logic
    }
}

3.3. Architecture Decision Records

What we can do: Create Architecture Decision Records (ADRs) that document why certain technical choices were made, providing historical context for architectural decisions.

Why this improves Copilot understanding: Architecture Decision Records (ADRs) provide context about why certain technical choices were made. This historical context helps Copilot understand not just what patterns to follow, but why they were chosen, enabling it to suggest solutions that align with our architectural philosophy and constraints.

Impact on suggestion relevance: ADRs help Copilot understand the reasoning behind architectural decisions, enabling it to suggest code that respects existing design choices, avoids previously rejected approaches, and aligns with our team’s architectural principles and trade-offs.

ADR Template

# ADR-001: Use OpenTelemetry for Distributed Tracing

## Status
Accepted

## Context
Need standardized observability across microservices with vendor-neutral approach.

## Decision
Adopt OpenTelemetry as the primary telemetry framework for Diginsight.

## Consequences
- **Positive**: Vendor-neutral, industry standard, rich ecosystem
- **Negative**: Learning curve, additional complexity in configuration
- **Neutral**: Migration effort from existing logging frameworks

## Implementation Notes
- Use OTLP exporters for data transmission
- Implement custom samplers for cost control
- Maintain backward compatibility with existing ILogger patterns

Key Decisions

Document major architectural choices:

# Architecture Decisions

## Telemetry Collection Strategy
- **Decision**: Use push-based telemetry with batching
- **Rationale**: Better performance, reduced network overhead
- **Trade-offs**: Slight delay in telemetry visibility

## Configuration Management
- **Decision**: Use strongly-typed configuration classes
- **Rationale**: Compile-time safety, better IntelliSense support
- **Implementation**: `IOptions<T>` pattern with validation
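The `IOptions<T>` decision above translates into a short registration sketch. `DiginsightOptions` and the section name are assumptions based on the examples in this document; `ValidateDataAnnotations` and `ValidateOnStart` are standard Microsoft.Extensions.Options APIs:

```csharp
// Strongly-typed configuration with fail-fast validation
services.AddOptions<DiginsightOptions>()
    .Bind(configuration.GetSection("Diginsight"))
    .ValidateDataAnnotations()   // enforce [Required], [Range] attributes on DiginsightOptions
    .ValidateOnStart();          // validate at host startup instead of on first access
```

Recording the sketch alongside the ADR gives Copilot a concrete template to imitate when registering new options classes.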

4. Best Practices Summary

Official GitHub Copilot Features (Prioritize These)

  1. Prompt Files (.github/prompts/*.prompt.md): Create reusable prompts with YAML frontmatter that become chat commands
  2. Repository Instructions (.github/copilot-instructions.md): Define project-wide coding standards and conventions
  3. Path-Specific Instructions (.github/instructions/*.instructions.md): Apply guidance to specific files using applyTo glob patterns
  4. Custom Agents (.github/agents/*.agent.md): Define specialized AI workflows (VS Code 1.106+ only)
  5. Proper File Extensions: Use .prompt.md (not .prompt) and .instructions.md for official features

Documentation Organization (Community Best Practices)

  1. Consistent Structure: Use clear, hierarchical document organization throughout your codebase
  2. Component READMEs: Place detailed README.md files in every major component directory
  3. Centralized Documentation: Use docs/ or .copilot/ for comprehensive reference materials
  4. Visual Representations: Include diagrams and flowcharts where relationships are complex
  5. Source vs. Generated: Place markdown documentation in source directories (Copilot reads markdown, not generated HTML)
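Put together, a layout combining official features with community documentation patterns might look like this (the tree is illustrative; file names echo examples used elsewhere in this document):

```
.github/
  copilot-instructions.md           # repository-wide guidance (official)
  prompts/
    *.prompt.md                     # reusable chat commands (official)
  instructions/
    *.instructions.md               # path-specific guidance via applyTo (official)
docs/
  architecture.md                   # system design (community pattern)
  schemas/                          # data model documentation (community pattern)
src/
  Services/
    TelemetryService/
      README.md                     # component-level documentation (community pattern)
```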

Code-Level Documentation

  1. Domain Concepts: Define all business-specific terms and relationships clearly
  2. Strategic Comments: Add context-rich comments explaining business rules, not obvious code
  3. Directive Comments: Use prefixes like “COPILOT:”, “PATTERN:”, “ANTI-PATTERN:” to guide AI
  4. Data Model Documentation: Document database structures, schemas, and example records
  5. API Contracts: Clearly document public interfaces, parameters, and return types

Architectural Context

  1. Component Relationships: Show how classes and methods interconnect
  2. Architecture Decisions: Record why technical choices were made using ADRs
  3. Code Patterns and Conventions: Establish and document naming conventions and standards
  4. Implementation Examples: Show ideal code patterns and optimizations
  5. Performance Notes: Document current metrics and optimization opportunities

Supporting Documentation

  1. Configuration Documentation: Document all configuration files, environment variables, and setup patterns
  2. Error Pattern Documentation: Catalog common errors, their causes, and solutions
  3. Testing Strategy: Document test patterns, mocking strategies, and validation approaches
  4. External References: Link to external docs but summarize key information locally (Copilot can’t access URLs)
  5. Effective Prompting: Reference existing documentation and patterns in your Copilot queries

Critical Reminders

  • DO: Use .github/prompts/ for prompt files with .prompt.md extension
  • DO: Enable custom instructions in your IDE settings to use instruction files
  • DO: Use applyTo patterns in instruction files for targeted guidance
  • DON’T: Use .github/copilot/prompts/ - this location is not officially supported
  • DON’T: Expect Copilot to access external URLs, wikis, or databases
  • DON’T: Rely solely on generated HTML documentation - use markdown source files
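For the second DO above: in VS Code, instruction and prompt files are gated behind settings. As of recent releases the keys look like the following, but verify them against your VS Code version, since these settings have changed across releases:

```json
{
  "github.copilot.chat.codeGeneration.useInstructionFiles": true,
  "chat.promptFiles": true
}
```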

5. References

Official GitHub Documentation

  • Using Prompt Files with GitHub Copilot

    Official GitHub documentation for creating reusable prompt files with .prompt.md extensions. Covers YAML frontmatter configuration, slash commands in VS Code, and hashtag commands in Visual Studio 17.10+.

  • Using Instruction Files with GitHub Copilot

    The authoritative guide to custom instructions, including repository-wide instructions (.github/copilot-instructions.md) and path-specific instructions (.github/instructions/*.instructions.md) with applyTo glob patterns.

  • Getting Code Suggestions in Your IDE with GitHub Copilot

    Official GitHub documentation covering how Copilot understands code context and generates suggestions. Includes specific guidance on improving code suggestions through comments, documentation, and code structure.

Visual Studio Documentation

  • GitHub Copilot in Visual Studio

    Microsoft’s official documentation for using GitHub Copilot in Visual Studio 2022 (version 17.10+). Covers hashtag commands, prompt files support, and custom instructions configuration.

  • Prompt Files in Visual Studio

    Details on how Visual Studio implements prompt files with hashtag command syntax and differences from VS Code’s slash command approach.

OpenTelemetry and Observability

  • OpenTelemetry Semantic Conventions

    Comprehensive guide to documenting observability concepts using industry-standard semantic conventions. Essential for creating consistent, AI-understandable telemetry documentation.

  • .NET Observability with OpenTelemetry

    Microsoft’s official guide to implementing observability in .NET applications. Demonstrates telemetry code structure and documentation patterns for .NET systems.


By implementing these specialized documentation practices, prioritizing official GitHub Copilot features while leveraging community patterns for comprehensive context, our team can significantly enhance Copilot’s understanding of the Diginsight Telemetry system, leading to more accurate code suggestions that respect our domain-specific patterns and requirements.